miχpods: MOM6
baseline KPP missing
bootstrap error on mean, median?
Try daily vs hourly
fix EUC max at 110
figure out time base
shear spectra
check eps vs Ri histogram
I think my Ri is too low?
Notes:#
Diagnostic notes/improvements#
match vertical resolution and extent
restrict TAO S2 to ADCP depth range
restrict to top 200m.
Filtering out MLD makes a big difference!
Since we’re working with derivatives, does the vertical grid matter (as long as it is coarser than the observations)?
1. `difvho`: ocean_vertical_heat_diffusivity
2. `difvso`: ocean_vertical_salt_diffusivity
Future model simulations#
Need lateral / neutral terms for total χ, ε
Add Atlantic TAO moorings + LOTUS, PAPA, others used in LMD94
Fix restart issue
Uncertainties#
Why can’t I reproduce the ε figure?
Why can’t I save the new station data?
Need to consider Ri smoothing in models. From Gustavo:
You mentioned some kind of inconsistency between your diagnostics and what MOM6 was doing for the interior shear mixing scheme. I am working to implement the option to apply vertical smoothing to Ri multiple times, instead of only once, which is what we are doing right now. I noticed that the diagnostic ri_grad_shear is saved before the smoothing is applied. There is another diagnostic (ri_grad_shear_smooth) that saves the smoothed Ri. Perhaps you were looking at ri_grad_shear instead of ri_grad_shear_smooth, and this could explain the inconsistency.
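One pass of that vertical smoothing can be sketched as a 1-2-1 filter on the Ri profile. This is only an illustration of why the pre- and post-smoothing diagnostics differ; the exact stencil, boundary handling, and number of passes in MOM6 may differ:

```python
def smooth_121(ri, passes=1):
    """Apply `passes` rounds of a 1-2-1 vertical filter to an Ri profile.

    Endpoints are left unsmoothed. Sketch only: MOM6's actual smoothing
    of the gradient Richardson number may use a different stencil.
    """
    out = list(ri)
    for _ in range(passes):
        prev = list(out)
        for k in range(1, len(prev) - 1):
            out[k] = 0.25 * prev[k - 1] + 0.5 * prev[k] + 0.25 * prev[k + 1]
    return out

# a noisy profile: low-Ri spikes get pulled up toward their neighbours
print(smooth_121([1.0, 0.1, 1.0, 0.2, 1.0], passes=1))
```

Comparing `ri_grad_shear` against `ri_grad_shear_smooth` should show exactly this kind of damping of the low-Ri extremes.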
Diagnostics Effort notes#
go through
`prepare`: pass in the dataset and an optional `xgcm.Grid`; different pre-processing steps for different datasets
the order and sign of the z coordinate are painful; normalizing here
composing with matplotlib subfigures
References#
Warner & Moum (2019)
Setup#
%load_ext autoreload
%load_ext rich
%load_ext watermark
import datetime
import glob
import os
import warnings
import tqdm
import cf_xarray as cfxr
import dask
import datatree
import dask_jobqueue
import dcpy
import distributed
import flox.xarray
import gsw_xarray
import holoviews as hv
import hvplot.xarray
import matplotlib as mpl
import matplotlib.pyplot as plt
import mom6_tools
import numpy as np
import pandas as pd
import xarray as xr
import xrft
import xgcm
from holoviews import opts
from datatree import DataTree
%aimport pump
from pump import mixpods
hv.notebook_extension("bokeh")
cfxr.set_options(warn_on_missing_variables=False)
xr.set_options(keep_attrs=True, display_expand_data=False)
plt.style.use("bmh")
plt.rcParams["figure.dpi"] = 140
gcmdir = "/glade/campaign/cgd/oce/people/bachman/TPOS_1_20_20_year/OUTPUT/" # MITgcm output directory
stationdirname = gcmdir
mixing_layer_depth_criteria = {
"boundary_layer_depth": {"name": "KPPhbl|KPP_OBLdepth|ePBL_h_ML"},
}
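The `name` entry is a regex that cf_xarray matches against variable names, so `ds.cf["boundary_layer_depth"]` resolves to whichever of the three run-dependent variables is present. The selection amounts to a regex match over variable names (a minimal stdlib sketch with hypothetical variable lists; cf_xarray's actual matching details may differ):

```python
import re

# Regex from the criteria dict above: the three possible
# boundary-layer-depth variable names across MITgcm/KPP/ePBL runs.
pattern = "KPPhbl|KPP_OBLdepth|ePBL_h_ML"

def resolve(varnames, pattern):
    """Return the variable names the criteria regex would select."""
    return [v for v in varnames if re.fullmatch(pattern, v)]

# a given run typically carries only one of the three
print(resolve(["KPP_OBLdepth", "thetao", "mlotst"], pattern))
```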
# MOM6 run catalog
catalog = {
"baseline": (
"baseline",
"gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.001.mixpods",
),
"epbl": ("ePBL", "gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.epbl.001.mixpods"),
"kpp.lmd.002": (
"KPP Ri0=0.5",
"gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.kpp.lmd.002.mixpods",
),
"kpp.lmd.003": (
"KPP Ri0=0.5, Ric=0.2,",
"gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.kpp.lmd.003.mixpods",
),
"kpp.lmd.004": (
"KPP ν0=2.5, Ric=0.2, Ri0=0.5",
"gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.kpp.lmd.004.mixpods",
),
"kpp.lmd.004.N150": (
"KPP ν0=2.5, Ric=0.2, Ri0=0.5, N=150",
# Anna's run symlinked in.
"gmom.e23.GJRAv3.TL319_t061_zstar_N150.kpp.lmd.004.mixpods",
),
}
%watermark -iv
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
The rich extension is already loaded. To reload it, use:
%reload_ext rich
The watermark extension is already loaded. To reload it, use:
%reload_ext watermark
pump : 0.1
cf_xarray : 0.7.7
dask_jobqueue: 0.7.3
flox : 0.6.7
mom6_tools : 0.0.post244
matplotlib : 3.6.3
hvplot : 0.8.2
distributed : 2023.1.0
xgcm : 0.6.1
xarray : 2023.1.0
holoviews : 1.15.4
xrft : 0.0.0
ipywidgets : 8.0.4
datatree : 0.0.11
dask : 2023.1.0
dcpy : 0.1.dev385+g121534c
tqdm : 4.64.1
pandas : 1.5.3
numpy : 1.23.5
gsw_xarray : 0.2.1
sys : 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:26:04) [GCC 10.4.0]
json : 2.0.9
if "client" in locals():
client.close()
del client
if "cluster" in locals():
cluster.close()
cluster = dask_jobqueue.PBSCluster(
cores=4, # The number of cores you want
memory="23GB", # Amount of memory
processes=1, # How many processes
queue="casper", # The type of queue to utilize (/glade/u/apps/dav/opt/usr/bin/execcasper)
local_directory="/local_scratch/pbs.$PBS_JOBID/dask/spill",
log_directory="/glade/scratch/dcherian/dask/", # Use your local directory
resource_spec="select=1:ncpus=4:mem=23GB", # Specify resources
project="ncgd0011", # Input your project ID here
walltime="02:00:00", # Amount of wall time
interface="ib0", # Interface to use
)
cluster.adapt(minimum_jobs=2, maximum_jobs=8)
# cluster.scale(4)
client = distributed.Client(cluster)
client
(Dask Client/PBSCluster repr: Client-59794b53-a7ed-11ed-a7dd-3cecef1b137e connected to a dask_jobqueue.PBSCluster; dashboard at https://jupyterhub.hpc.ucar.edu/stable/user/dcherian/proxy/8787/status; 0 workers, 0 B memory at startup)
Read data#
LES#
les = mixpods.load_les_moorings()
--> The keys in the returned dictionary of datasets are constructed as follows:
'latitude.longitude.month.kind.length'
microstructure#
micro = mixpods.load_microstructure()
micro
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/computation.py:771: RuntimeWarning: invalid value encountered in log10
result_data = func(*input_data)
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/computation.py:771: RuntimeWarning: divide by zero encountered in log10
result_data = func(*input_data)
<xarray.DatasetView>
Dimensions: ()
Data variables:
*empty*
TAO#
tao_Ri = xr.load_dataarray(
"tao-hourly-Ri-seasonal-percentiles.nc"
).cf.guess_coord_axis()
with dask.config.set(scheduler="threads"):
tao_gridded = mixpods.load_tao()
tao_gridded
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/dataset.py:256: UserWarning: The specified Dask chunks separate the stored chunks along dimension "depth" starting at index 42. This could degrade performance. Instead, consider rechunking after loading.
warnings.warn(
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/dataset.py:256: UserWarning: The specified Dask chunks separate the stored chunks along dimension "time" starting at index 199726. This could degrade performance. Instead, consider rechunking after loading.
warnings.warn(
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/dataset.py:256: UserWarning: The specified Dask chunks separate the stored chunks along dimension "longitude" starting at index 2. This could degrade performance. Instead, consider rechunking after loading.
warnings.warn(
<xarray.Dataset>
Dimensions: (time: 212302, depth: 61, depthchi: 7)
Coordinates: (12/19)
deepest (time) float64 dask.array<chunksize=(212302,), meta=np.ndarray>
* depth (depth) float64 -300.0 -295.0 -290.0 ... -10.0 -5.0 0.0
eucmax (time) float64 dask.array<chunksize=(33130,), meta=np.ndarray>
latitude float32 0.0
longitude float32 -140.0
mld (time) float64 dask.array<chunksize=(212302,), meta=np.ndarray>
... ...
oni (time) float32 -0.9 -0.9 -0.9 -0.9 ... nan nan nan nan
en_mask (time) bool False False False ... False False False
ln_mask (time) bool True True True True ... False False False
warm_mask (time) bool True True True True ... True True True True
cool_mask (time) bool False False False ... False False False
enso_transition (time) <U12 'La-Nina warm' ... '____________'
Data variables: (12/38)
N2 (time, depth) float64 dask.array<chunksize=(33130, 61), meta=np.ndarray>
N2T (time, depth) float64 dask.array<chunksize=(33130, 61), meta=np.ndarray>
Ri (time, depth) float64 dask.array<chunksize=(33130, 61), meta=np.ndarray>
Rig_T (time, depth) float64 dask.array<chunksize=(33130, 61), meta=np.ndarray>
S (time, depth) float64 dask.array<chunksize=(33130, 61), meta=np.ndarray>
S2 (time, depth) float32 dask.array<chunksize=(33130, 61), meta=np.ndarray>
... ...
shred2 (time, depth) float64 dask.array<chunksize=(33130, 61), meta=np.ndarray>
Rig (time, depth) float64 dask.array<chunksize=(33130, 61), meta=np.ndarray>
sst (time) float64 dask.array<chunksize=(33130,), meta=np.ndarray>
Tflx_dia_diff (time, depthchi) float64 nan nan nan nan ... nan nan nan
Jb (time, depthchi, depth) float64 dask.array<chunksize=(33130, 7, 61), meta=np.ndarray>
Rif (time, depthchi, depth) float64 dask.array<chunksize=(33130, 7, 61), meta=np.ndarray>
Attributes:
CREATION_DATE: 23:26 24-FEB-2021
Data_Source: Global Tropical Moored Buoy Array Project O...
File_info: Contact: Dai.C.McClurg@noaa.gov
Request_for_acknowledgement: If you use these data in publications or pr...
_FillValue: 1.0000000409184788e+35
array: TAO/TRITON
missing_value: 1.0000000409184788e+35
platform_code: 0n165e
site_code: 0n165e
wmo_platform_code: 52321
Develop ε-τ diagnostic#
tao_gridded.eps.load().count()
<xarray.DataArray 'eps' ()>
56948
Coordinates:
latitude float32 0.0
longitude float32 -140.0
reference_pressure int64 0
Attributes: (12/14)
long_name: turbulence dissipation rate
standard_name: specific_turbulent_kinetic_energy_dissipation_in_...
ncei_name: turbulent kinetic energy (TKE) dissipation rate
units: W kg-1
FillValue: -9999
valid_min: 1e-12
... ...
grid_mapping: crs
source: inferred from fast thermistor spectral scaling
references: Moum J.N. and J.D. Nash, Mixing measurements on a...
cell_methods: depth: point, time: mean
platform: mooring
instrument: chipod
tao_gridded.taux.load().count()
<xarray.DataArray 'taux' ()>
19032
Coordinates:
latitude float32 0.0
longitude float32 -140.0
reference_pressure int64 0
Attributes:
FORTRAN_format:
epic_code: 442
generic_name: tx
long_name: ZONAL WIND STRESS
name: TX
standard_name: surface_downward_eastward_stress
units: N/m2
def grouped_median(
group_idx, array, *, axis=-1, size=None, fill_value=None, dtype=None
):
import numpy_groupies as npg
return npg.aggregate_numpy.aggregate(
group_idx,
array,
func=np.nanmedian,
axis=axis,
size=size,
fill_value=fill_value,
dtype=dtype,
)
agg_median = flox.aggregations.Aggregation(
name="median",
numpy=grouped_median,
fill_value=-1,
chunk=None,
combine=None,
)
eps_tau_median = flox.xarray.xarray_reduce(
tao_gridded.eps.load(),
tao_gridded.time.dt.hour,
tao_gridded.tau.load(),
expected_groups=(None, pd.IntervalIndex.from_breaks([0, 0.04, 0.075, 5])),
func=agg_median,
)
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/numpy/lib/nanfunctions.py:1217: RuntimeWarning: All-NaN slice encountered
r, k = function_base._ureduce(a, func=_nanmedian, axis=axis, out=out,
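What the custom aggregation computes can be sketched in pure Python: group values by bin label and take a NaN-ignoring median per group (toy data here; flox/numpy_groupies do this vectorized over the hour × τ-bin groups):

```python
import math
from collections import defaultdict
from statistics import median

def grouped_nanmedian(group_idx, values):
    """Median of `values` per group label, skipping NaNs.

    Scalar sketch of what grouped_median/npg.aggregate does on arrays.
    """
    groups = defaultdict(list)
    for g, v in zip(group_idx, values):
        if not math.isnan(v):
            groups[g].append(v)
    return {g: median(vs) for g, vs in sorted(groups.items())}

# toy example: two hour-of-day groups; the NaN in group 1 is ignored
print(grouped_nanmedian([0, 0, 1, 1, 1], [1.0, 3.0, 2.0, float("nan"), 4.0]))
# group 0 -> 2.0, group 1 -> 3.0
```

The all-NaN warning above comes from groups whose samples are entirely NaN, for which `np.nanmedian` has nothing to reduce.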
eps_tau_median.plot(x="hour", hue="tau", row="depthchi")
<xarray.plot.facetgrid.FacetGrid object at 0x2b589a6d6110>
np.log10(tao_gridded.eps).sel(time=slice("2005", "2015")).resample(
time="M"
).mean().hvplot.quadmesh(clim=(-7.5, -6))
sub = tao_gridded.sel(time="2010-01")
sub.v.hvplot()
t = sub.theta.hvplot.quadmesh(cmap="turbo_r")
dt = (
sub.theta - sub.theta.reset_coords(drop=True).cf.sel(Z=[0, -5]).cf.max("Z")
).hvplot.quadmesh(clim=(-0.15, 0.15), cmap="RdBu_r")
newmld = mixpods.get_mld_tao_theta(sub.theta.reset_coords(drop=True))
(
dt
* sub.reset_coords().mldT.hvplot.line(color="w", line_width=2)
* newmld.reset_coords(drop=True).hvplot.line(color="orange", line_width=1)
).opts(frame_width=1200)
(
tao_gridded.reset_coords().mldT.resample(time="5D").mean().hvplot.line()
* mixpods.get_mld_tao_theta((tao_gridded.reset_coords().theta))
.resample(time="5D")
.mean()
.hvplot.line()
)
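The temperature-criterion MLD being compared here can be sketched as: find the shallowest depth where θ first drops some threshold below the near-surface value. A minimal sketch; the threshold value and reference level used by `mixpods.get_mld_tao_theta` are assumptions:

```python
def mld_from_theta(depth, theta, dT=0.015):
    """Depth where theta first drops `dT` below the near-surface value.

    Sketch of a temperature-criterion MLD. `dT` and the use of the
    shallowest sample as the reference are assumptions here.
    Expects depth ordered surface -> bottom (negative downward).
    """
    tref = theta[0]  # near-surface reference temperature
    for z, t in zip(depth, theta):
        if tref - t > dT:
            return z
    return depth[-1]  # mixed over the whole profile

print(mld_from_theta([0.0, -10.0, -20.0, -30.0], [25.0, 25.0, 24.9, 24.0]))
```

Differences between `mldT` and `newmld` in the overlay above come down to exactly these choices of threshold and reference level.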
tao_gridded.u.cf.plot()
tao_gridded.eucmax.plot()
[<matplotlib.lines.Line2D>]
MITgcm stations#
stations = pump.model.read_stations_20(stationdirname)
gcmeq = stations.sel(
longitude=[-155.025, -140.025, -125.025, -110.025], method="nearest"
)
# enso = pump.obs.make_enso_mask()
# mitgcm["enso"] = enso.reindex(time=mitgcm.time.data, method="nearest")
# gcmeq["eucmax"] = pump.calc.get_euc_max(gcmeq.u)
# pump.calc.calc_reduced_shear(gcmeq)
# oni = pump.obs.process_oni()
# gcmeq["enso_transition"] = mixpods.make_enso_transition_mask(oni).reindex(time=gcmeq.time.data, method="nearest")
mitgcm = gcmeq.sel(longitude=-140.025, method="nearest")
metrics = mixpods.normalize_z(pump.model.read_metrics(stationdirname), sort=True)
mitgcm = mixpods.normalize_z(mitgcm, sort=True)
mitgcm_grid = xgcm.Grid(
metrics.sel(latitude=mitgcm.latitude, longitude=mitgcm.longitude, method="nearest"),
coords=({"Z": {"center": "depth", "outer": "RF"}, "Y": {"center": "latitude"}}),
metrics={"Z": ("drF", "drC")},
periodic=False,
boundary="fill",
fill_value=np.nan,
)
mitgcm.theta.attrs["standard_name"] = "sea_water_potential_temperature"
mitgcm.salt.attrs["standard_name"] = "sea_water_salinity"
mitgcm["KPPviscAz"].attrs["standard_name"] = "ocean_vertical_viscosity"
mitgcm["KPPdiffKzT"].attrs["standard_name"] = "ocean_vertical_heat_diffusivity"
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/indexing.py:1374: PerformanceWarning: Slicing is producing a large chunk. To accept the large
chunk and silence this warning, set the option
>>> with dask.config.set(**{'array.slicing.split_large_chunks': False}):
...     array[indexer]
To avoid creating the large chunks, set the option
>>> with dask.config.set(**{'array.slicing.split_large_chunks': True}):
...     array[indexer]
return self.array[key]
mitgcm = (
mixpods.prepare(mitgcm, grid=mitgcm_grid, oni=pump.obs.process_oni())
.sel(latitude=0, method="nearest")
.cf.sel(Z=slice(-250, 0))
)
mitgcm_grid
<xgcm.Grid>
Z Axis (not periodic, boundary='fill'):
* center depth --> outer
* outer RF --> center
Y Axis (not periodic, boundary='fill'):
* center latitude
mitgcm.u.cf.plot()
mitgcm.mldT.reset_coords(drop=True).cf.plot()
mitgcm.eucmax.reset_coords(drop=True).cf.plot()
[<matplotlib.lines.Line2D>]
mixpods.plot_n2s2pdf(mitgcm.n2s2pdf.sel(enso_transition_phase="none"))
<matplotlib.contour.QuadContourSet>
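`plot_n2s2pdf` visualizes a joint PDF of stratification and shear: a 2D histogram in (log₁₀ N², log₁₀ S²) space, where the Ri = N²/S² = 0.25 line separates shear-unstable from stable samples. The binning can be sketched in pure Python (bin edges and normalization here are hypothetical; the real diagnostic lives in `mixpods.pdf_N2S2`):

```python
import math

def n2s2_hist(n2, s2, edges):
    """Joint counts of log10(N2) vs log10(S2) on shared, half-open bins.

    Minimal sketch; the real version masks bad samples and normalizes
    counts into a PDF before contouring.
    """
    nb = len(edges) - 1
    counts = [[0] * nb for _ in range(nb)]
    for a, b in zip(n2, s2):
        if a <= 0 or b <= 0:
            continue  # log10 undefined; real data masks these
        i = next((k for k in range(nb) if edges[k] <= math.log10(a) < edges[k + 1]), None)
        j = next((k for k in range(nb) if edges[k] <= math.log10(b) < edges[k + 1]), None)
        if i is not None and j is not None:
            counts[i][j] += 1
    return counts

# two samples on opposite sides of the Ri = 0.25 line
print(n2s2_hist([3e-5, 3e-4], [3e-4, 3e-5], [-5.0, -4.0, -3.0]))
```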
MOM6#
calculate ONI#
(
oniobs.hvplot.line(x="time", label="obs")
* onimom6.hvplot.line(x="time", label="MOM6")
).opts(ylabel="ONI [°C]")
oniobs.enso_transition_mask.plot()
onimom6.enso_transition_mask.plot(color="r")
[<matplotlib.lines.Line2D>]
MOM6 sections#
Combine sections#
catalog
casename = "gmom.e23.GJRAv3.TL319_t061_zstar_N150.kpp.lmd.004.mixpods"
ds = mixpods.read_mom6_sections(casename)
100%|██████████| 39/39 [02:32<00:00, 3.90s/it]
ds.drop_vars(
["average_DT", "average_T2", "average_T1", "time_bnds"], errors="ignore"
).chunk({"time": 24 * 365}).to_zarr(
f"/glade/scratch/dcherian/archive/{casename}/ocn/moorings/tao.zarr",
consolidated=True,
mode="w",
)
<xarray.backends.zarr.ZarrStore object at 0x2aafa7cb0c10>
reload = mixpods.load_mom6_sections(casename)
reload.uo.count().compute()
mixpods.mom6_sections_to_zarr(casename)
casename = "gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.001.mixpods"
mixpods.mom6_sections_to_zarr(casename)
casename = "gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.epbl.001.mixpods"
mixpods.mom6_sections_to_zarr(casename)
100%|██████████| 31/31 [01:50<00:00, 3.57s/it]
/glade/u/home/dcherian/pump/pump/mixpods.py:887: RuntimeWarning: Converting a CFTimeIndex with dates from a non-standard calendar, 'noleap', to a pandas.DatetimeIndex, which uses dates from the standard calendar. This may lead to subtle errors in operations that depend on the length of time between dates.
mom6tao["time"] = mom6tao.indexes["time"].to_datetimeindex()
casename = "gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.kpp.lmd.002.mixpods"
mixpods.mom6_sections_to_zarr(casename)
casename = "gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.kpp.lmd.003.mixpods"
mixpods.mom6_sections_to_zarr(casename)
100%|██████████| 38/38 [02:07<00:00, 3.36s/it]
/glade/u/home/dcherian/pump/pump/mixpods.py:900: RuntimeWarning: Converting a CFTimeIndex with dates from a non-standard calendar, 'noleap', to a pandas.DatetimeIndex, which uses dates from the standard calendar. This may lead to subtle errors in operations that depend on the length of time between dates.
mom6tao["time"] = mom6tao.indexes["time"].to_datetimeindex()
casename = "gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.kpp.lmd.004.mixpods"
mixpods.mom6_sections_to_zarr(casename)
Read sections#
dask.config.set(scheduler=client)
m = mixpods.read_mom6_sections(
"gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.001.mixpods"
)
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/dask_jobqueue/core.py:255: FutureWarning: job_extra has been renamed to job_extra_directives. You are still using it (even if only set to []; please also check config files). If you did not set job_extra_directives yet, job_extra will be respected for now, but it will be removed in a future release. If you already set job_extra_directives, job_extra is ignored and you can remove it.
warnings.warn(warn, FutureWarning)
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/dask_jobqueue/core.py:274: FutureWarning: env_extra has been renamed to job_script_prologue. You are still using it (even if only set to []; please also check config files). If you did not set job_script_prologue yet, env_extra will be respected for now, but it will be removed in a future release. If you already set job_script_prologue, env_extra is ignored and you can remove it.
warnings.warn(warn, FutureWarning)
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/dask_jobqueue/pbs.py:82: FutureWarning: project has been renamed to account as this kwarg was used wit -A option. You are still using it (please also check config files). If you did not set account yet, project will be respected for now, but it will be removed in a future release. If you already set account, project is ignored and you can remove it.
warnings.warn(warn, FutureWarning)
100%|██████████| 21/21 [02:14<00:00, 6.42s/it]
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/coding/times.py:360: FutureWarning: Index.ravel returning ndarray is deprecated; in a future version this will return a view on self.
sample = dates.ravel()[0]
/glade/u/home/dcherian/pump/pump/mixpods.py:847: RuntimeWarning: Converting a CFTimeIndex with dates from a non-standard calendar, 'noleap', to a pandas.DatetimeIndex, which uses dates from the standard calendar. This may lead to subtle errors in operations that depend on the length of time between dates.
mom6tao["time"] = mom6tao.indexes["time"].to_datetimeindex()
/glade/u/home/dcherian/pump/pump/mixpods.py:859: UserWarning: Kv_v not present. Assuming equal to Kv_u
warnings.warn("Kv_v not present. Assuming equal to Kv_u")
m.drop_vars(["average_DT", "average_T1", "average_T2", "time_bnds"]).chunk(
{"time": 365 * 24}
).to_zarr(
"/glade/scratch/dcherian/archive/gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.001.mixpods/ocn/moorings/tao.zarr",
mode="w",
)
m.sel(yh=0, method="nearest")[["ePBL_h_ML", "mlotst"]].to_array().hvplot.line(
by="variable", x="time"
)
m.sel(yh=0, method="nearest")[["Kd_heat", "Kd_ePBL"]].to_array().hvplot.line(
by="variable", groupby="time", logy=True, ylim=(1e-6, 1e-1), xlim=(0, 500)
)
mom6140 = mixpods.load_mom6_sections(casename).load()
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xgcm/grid.py:1471: UserWarning: Metric at ('time', 'zi', 'yh', 'xh') being interpolated from metrics at dimensions ('time', 'zl', 'yh', 'xh'). Boundary value set to 'extend'.
warnings.warn(
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xgcm/grid.py:1471: UserWarning: Metric at ('time', 'zi', 'yq', 'xh') being interpolated from metrics at dimensions ('time', 'zl', 'yh', 'xh'). Boundary value set to 'extend'.
warnings.warn(
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/cf_xarray/accessor.py:1638: UserWarning: Variables {'areacello'} not found in object but are referred to in the CF attributes.
warnings.warn(
mom6140.uo.cf.plot(robust=True)
mom6140.eucmax.cf.plot()
mom6140.mldT.cf.plot(lw=0.5, color="k")
mixpods.plot_n2s2pdf(mom6140.n2s2pdf.sel(enso_transition_phase="none"))
<matplotlib.contour.QuadContourSet>
Pick simulations#
import warnings
datasets = {
"TAO": tao_gridded,
# "MITgcm": mitgcm,
}
catalog_sub = {k: catalog[k] for k in catalog if "004" in k or "baseline" in k}
display(catalog_sub)
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=UserWarning)
warnings.simplefilter("ignore", category=FutureWarning)
for short_name, (long_name, folder) in tqdm.tqdm(catalog_sub.items()):
datasets.update(
{
short_name: mixpods.load_mom6_sections(folder).assign_attrs(
{"title": long_name}
)
}
)
{'baseline': ('baseline',
              'gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.001.mixpods'),
 'kpp.lmd.004': ('KPP ν0=2.5, Ric=0.2, Ri0=0.5',
                 'gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.kpp.lmd.004.mixpods'),
 'kpp.lmd.004.N150': ('KPP ν0=2.5, Ric=0.2, Ri0=0.5, N=150',
                      'gmom.e23.GJRAv3.TL319_t061_zstar_N150.kpp.lmd.004.mixpods')}
100%|██████████| 3/3 [03:36<00:00, 72.24s/it]
tree = DataTree.from_dict(datasets)
# tree = tree.sel(time=slice('2000', '2017'))
ref = tree["TAO"].ds.reset_coords(drop=True).cf.sel(Z=slice(-120, None))
counts = np.minimum(ref["S2"].cf.count("Z"), ref["N2T"].cf.count("Z")).load()
def calc_histograms(ds):
ds = ds.copy()
ds["tao_mask"] = counts.reindex(time=ds.time, method="nearest") > 5
ds["tao_mask"].attrs = {
"description": "True when there are more than 5 5-m T, u, v in TAO dataset"
}
# ds = ds.where(ds.tao_mask)
return ds.update(mixpods.pdf_N2S2(ds))
tree = tree.map_over_subtree(calc_histograms)
if "les" in tree:
tree["les"] = les["0.0.-140.oct.mooring.month"]
tree["les"] = tree["les"].isel(z=slice(-2))
if "micro" in locals():
tree.update(micro)
mixpods.validate_tree(tree)
tree = mixpods.persist_tree(tree)
Verify depth is normalized#
for name, ds in datasets.items():
(ds.cf["sea_water_x_velocity"].cf["Z"].plot(marker=".", ls="none", label=name))
plt.legend()
<matplotlib.legend.Legend>
datasets["baseline"].zi.diff("zi").plot(marker="x")
[<matplotlib.lines.Line2D>]
Compare EUC maximum and MLD#
Monthly climatology#
import tqdm
for node in tqdm.tqdm(tree.children):
tree[node]["mldT"] = tree[node]["mldT"].load()
100%|██████████| 3/3 [00:28<00:00, 9.35s/it]
def to_dataset(tree):
concat = xr.concat([v.ds for v in tree.children.values()], dim="node")
concat["node"] = list(tree.children.keys())
return concat
clim = to_dataset(
tree.map_over_subtree(
lambda x: x.reset_coords()[["mldT", "eucmax"]].groupby("time.month").mean()
)
).load()
clim
<xarray.Dataset>
Dimensions: (node: 3, month: 12)
Coordinates:
* month (month) int64 1 2 3 4 5 6 7 8 9 10 11 12
* node (node) <U16 'TAO' 'kpp.lmd.004' 'kpp.lmd.004.N150'
Data variables:
mldT (node, month) float64 -24.54 -16.38 -12.75 ... -21.87 -25.93 -31.85
eucmax (node, month) float64 -107.1 -100.9 -93.21 ... -103.6 -104.7 -103.6
hv.Layout(
[
hv.Overlay(
[
tree[node]
.ds["mldT"]
.reset_coords(drop=True)
.groupby("time.month")[month]
.hvplot.hist(label=node, legend=True)
.opts(frame_width=150)
for node in tree.children
]
).opts(title=str(month))
for month in np.arange(1, 13)
]
).cols(4)
(clim.mldT.hvplot(by="node") + clim.eucmax.hvplot(by="node")).cols(1)
mixpods.plot_timeseries(tree, "mldT", obs="TAO")
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/bokeh/core/property/bases.py:259: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
return new == old
datasets["TAO"].eucmax.attrs["long_name"] = "EUC maximum"
mixpods.plot_timeseries(tree.sel(time="2008"), "eucmax", obs="TAO")
MLD Maps#
def read_sfc(casename):
path = f"/glade/scratch/dcherian/archive/{casename}/ocn/hist/*sfc*00[4-7]*.nc"
sfc = xr.open_mfdataset(
path, data_vars="minimal", coords="minimal", compat="override", parallel=True
)
return sfc
sfc = DataTree()
casenames = {
"MOM6 KPP": "gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.001.mixpods",
"MOM6 ePBL": "gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.epbl.001.mixpods",
}
for name, casename in casenames.items():
sfc[name] = DataTree(read_sfc(casename))
sfc
<xarray.DatasetView>
Dimensions: ()
Data variables:
*empty*
mldclim = sfc.map_over_subtree(
lambda node: node.mlotst.groupby("time.month").mean()
).compute()
filepath = (
"/glade/work/gmarques/cesm/datasets/MLD/deBoyer/deBoyer_MLD_remapped_to_tx06v1.nc"
)
mldclim["obs"] = DataTree(xr.open_dataset(filepath))
mldclim
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/coding/times.py:699: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/indexing.py:524: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return np.asarray(array[self.key], dtype=None)
<xarray.DatasetView>
Dimensions: ()
Data variables:
*empty*
mldclim.
mldclim["MOM6 KPP"] = DataTree(
read_sfc("gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.001.mixpods")
.mlotst.groupby("time.month")
.mean()
)
mldclim["MOM6 ePBL"] = DataTree(
read_sfc("gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.epbl.001.mixpods")
.mlotst.groupby("time.month")
.mean()
)
(
(mlotst - mldclim.mld.groupby("time.month").first())
.sel(yh=slice(-5, 5), xh=slice(-170, -80))
.plot(col="month", robust=True, col_wrap=3, aspect=2)
)
<xarray.plot.facetgrid.FacetGrid>
Mixing layer depth?#
todrop = ["TAO"]
notao = DataTree.from_dict(
{k: v.ds for k, v in tree.children.items() if k not in todrop}
)
with cfxr.set_options(custom_criteria=mixing_layer_depth_criteria):
plot = mixpods.plot_timeseries(notao, "boundary_layer_depth", obs=None)
plot
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/cf_xarray/accessor.py:1663: UserWarning: Variables {'areacello'} not found in object but are referred to in the CF attributes.
warnings.warn(
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/cf_xarray/accessor.py:1663: UserWarning: Variables {'areacello'} not found in object but are referred to in the CF attributes.
warnings.warn(
Mean profiles: mean, std#
Model mean S² is lower than observed by a factor of ~2
kwargs = dict(
show_grid=True,
title="",
invert_axes=True,
legend_position="right",
frame_width=150,
frame_height=500,
)
limkwargs = dict(
ylim=(-1e-3, 1.9e-3),
xlim=(-250, 0),
)
def plot_profile_fill(datasets, var, label):
from datatree import DataTree
if isinstance(datasets, DataTree):
datasets = datasets.children
return mixpods.map_hvplot(
lambda ds, name, muted: mixpods.plot_profile_fill(
ds.ds.cf[var].load(), label=name, muted=muted
),
datasets,
).opts(**kwargs, ylabel=label)
S2 = plot_profile_fill(tree, "S2", "S^2")
N2 = plot_profile_fill(tree, "N2T", "N_T^2")
Ri = (
(
mixpods.map_hvplot(
lambda ds, name, muted: mixpods.cfplot(
ds.Rig_T.load().median("time"), name, muted=muted
),
datasets,
)
* hv.HLine(0.25).opts(color="k", line_width=0.5)
)
.opts(**kwargs, **limkwargs, ylabel="median Ri_g^T")
.opts(ylim=(0, 1))
)
T = plot_profile_fill(tree, "sea_water_potential_temperature", "T")
S = plot_profile_fill(tree, "sea_water_salinity", "S")
u = plot_profile_fill(tree, "sea_water_x_velocity", "u")
v = plot_profile_fill(tree, "sea_water_y_velocity", "v")
# dTdz = plot_profile_fill(tree, "dTdz", "∂T/∂z")
# dSdz = plot_profile_fill(tree, "dSdz", "∂S/∂z")
((S2 + N2).opts("Curve", xrotation=30) + Ri.opts(xaxis="top", xrotation=30)).opts(
sizing_mode="stretch_width"
)
((S2 + N2).opts("Curve", xrotation=30) + Ri.opts(xaxis="top", xrotation=30)).opts(
sizing_mode="stretch_width"
)
(T + u.opts(ylim=(-0.5, 1.5))).opts(sizing_mode="stretch_width")
(T + S + u.opts(ylim=(-0.5, 1.5)) + v.opts(ylim=(-0.5, 0.5))).opts(
sizing_mode="stretch_width"
)
Remap to EUC coordinate#
gcmeq.coords["zeuc"] = gcmeq.depth - gcmeq.eucmax
remapped = flox.xarray.xarray_reduce(
gcmeq.drop_vars(["SSH", "KPPhbl", "mld", "eucmax"], errors="ignore"),
"zeuc",
dim="depth",
func="mean",
expected_groups=(np.arange(-250, 250.1, 5),),
isbin=(True,),
)
remapped["zeuc_bins"].attrs["units"] = "m"
remapped
<xarray.Dataset>
Dimensions: (time: 174000, longitude: 4, zeuc_bins: 100)
Coordinates:
latitude float64 0.025
* longitude (longitude) float64 -155.0 -140.0 -125.0 -110.0
* time (time) datetime64[ns] 1998-12-31T18:00:00 ... 2018-11-06...
cool_mask (time) bool True True True True ... False False False False
warm_mask (time) bool False False False False ... True True True True
* zeuc_bins (zeuc_bins) object (-250.0, -245.0] ... (245.0, 250.0]
Data variables: (12/25)
DFrI_TH (time, longitude, zeuc_bins) float32 dask.array<chunksize=(6000, 1, 100), meta=np.ndarray>
KPPdiffKzT (time, longitude, zeuc_bins) float32 dask.array<chunksize=(6000, 1, 100), meta=np.ndarray>
KPPg_TH (time, longitude, zeuc_bins) float32 dask.array<chunksize=(6000, 1, 100), meta=np.ndarray>
KPPviscAz (time, longitude, zeuc_bins) float32 dask.array<chunksize=(6000, 1, 100), meta=np.ndarray>
VISrI_Um (time, longitude, zeuc_bins) float32 dask.array<chunksize=(6000, 1, 100), meta=np.ndarray>
VISrI_Vm (time, longitude, zeuc_bins) float32 dask.array<chunksize=(6000, 1, 100), meta=np.ndarray>
... ...
S2 (time, longitude, zeuc_bins) float32 dask.array<chunksize=(6000, 1, 100), meta=np.ndarray>
shear (time, longitude, zeuc_bins) float32 dask.array<chunksize=(6000, 1, 100), meta=np.ndarray>
N2 (time, longitude, zeuc_bins) float32 dask.array<chunksize=(6000, 1, 100), meta=np.ndarray>
shred2 (time, longitude, zeuc_bins) float32 dask.array<chunksize=(6000, 1, 100), meta=np.ndarray>
Ri (time, longitude, zeuc_bins) float32 dask.array<chunksize=(6000, 1, 100), meta=np.ndarray>
enso_transition (zeuc_bins, time) <U12 'La-Nina cool' ... 'El-Nino warm'
Attributes:
easting: longitude
northing: latitude
title: Station profile, index (i,j)=(1201,240)
remapped = remapped.persist()
cluster.scale(6)
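The `eucmax` coordinate used above is computed upstream; a hypothetical sketch of the idea (EUC core = depth of maximum zonal velocity, data fabricated, names `zgrid`/`u_sim` are not from the notebook):

```python
import numpy as np
import xarray as xr

# fabricated zonal velocity with a subsurface maximum at -100 m
zgrid = np.arange(-200.0, 0.1, 10.0)
u_sim = xr.DataArray(
    np.exp(-(((zgrid[:, None] + 100.0) / 40.0) ** 2)) * (1.0 + 0.1 * np.arange(4)),
    dims=("depth", "time"),
    coords={"depth": zgrid, "time": np.arange(4)},
    name="u",
)

# EUC core = depth of maximum u at each time step
eucmax = u_sim.depth[u_sim.argmax("depth")].drop_vars("depth")
u_sim.coords["zeuc"] = u_sim.depth - eucmax  # EUC-relative depth, 0 at the core
```

Binning onto `zeuc` (as done with `flox.xarray.xarray_reduce` above) then composites profiles relative to the moving EUC core rather than fixed depth.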
Seasonal median Ri: (0, 140)#
remapped["longitude"] = [-155.0, -140, -125, -110]
fg = (
remapped.Ri.groupby("time.season")
.median()
.sel(season=["DJF", "MAM", "JJA", "SON"], longitude=[-140, -110])
.cf.plot(
col="season", row="longitude", xlim=(0, 1.5), ylim=(-20, None), label="MITgcm"
)
)
fg.data = tao_Ri.cf.sel(quantile=0.5, longitude=[-140, -110])
fg.map_dataarray_line(
xr.plot.line, x=None, y="zeuc", hue="season", color="k", label="TAO"
)
fg.map(lambda: plt.axvline(0.25))
fg.axes[-1, -1].legend(bbox_to_anchor=(1.5, 1))
<matplotlib.legend.Legend>
1D Climatological heat budget#
Using TAO swnet, qnet measurements instead of Tropflux
tree["TAO"] = DataTree(tao_gridded)
mixpods.climo_heat_budget_1d(tree["TAO"].ds, penetration="moum")
mixpods.climo_heat_budget_1d(tree["TAO"].ds, penetration="mom")
No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
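The `penetration=` options select a penetrative shortwave profile. As a point of reference, a generic two-band Paulson & Simpson (1977) profile looks like the sketch below; the coefficients are the Jerlov type-I values and are not necessarily what `mixpods.climo_heat_budget_1d` actually uses:

```python
import numpy as np

def sw_frac(z, R=0.58, zeta1=0.35, zeta2=23.0):
    """Fraction of surface shortwave flux remaining at depth z (z <= 0, metres).

    Two exponential bands: a short-wavelength band absorbed near the surface
    (e-folding zeta1) and a penetrating band (e-folding zeta2).
    """
    return R * np.exp(z / zeta1) + (1.0 - R) * np.exp(z / zeta2)

z = np.array([0.0, -1.0, -10.0, -40.0])
frac = sw_frac(z)  # decays monotonically from 1 at the surface
```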
f, ax = plt.subplots(
1,
len(tree),
constrained_layout=True,
squeeze=False,
sharex=True,
sharey=True,
figsize=(10, 3),
)
for axx, (name, node) in zip(ax.flat, tree.children.items()):
mixpods.plot_climo_heat_budget_1d(node.ds, mxldepth=-40, penetration="mom", ax=axx)
axx.set_title(name)
dcpy.plots.clean_axes(ax)
No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
mixpods.plot_climo_heat_budget_1d(tree["kpp.lmd.004.N150"].ds, penetration="mom")
No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
Stability diagram: 4N2-S2 plane#
Warner & Moum (2019):
Both velocity and temperature are hourly averaged before interpolation to 5-m vertical bins
Contours enclose 50% of data
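One way to get contours that "enclose 50% of data" from a joint histogram is to threshold the PDF at the density level where the ranked cumulative probability mass reaches 0.5; a sketch with fabricated Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=20000)
y = rng.normal(size=20000)
H, xe, ye = np.histogram2d(x, y, bins=50, density=True)
area = np.diff(xe)[0] * np.diff(ye)[0]  # equal-width bins

# rank bin densities high -> low; the 50% level is where the cumulative
# probability mass first reaches 0.5
flat = np.sort(H.ravel())[::-1]
cum = np.cumsum(flat) * area
level50 = flat[np.searchsorted(cum, 0.5)]

# contouring the PDF at level50 encloses ~50% of the samples
enclosed = H[H >= level50].sum() * area
```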
Compare daily vs hourly frequency#
resampled = tree.from_dict({k: v.resample(time="D").mean() for k, v in datasets.items()})
#resampled["TAO"].update(mixpods.prepare(resampled["TAO"].ds, oni=mixpods.process_oni()))
for name, node in resampled.children.items():
node.update(mixpods.prepare(node.ds, oni=tree[name]["oni"].resample(time="D").mean().reset_coords(drop=True)))
node.update(mixpods.pdf_N2S2(node.ds))
mixpods.validate_tree(node)
resampled = mixpods.load_tree(resampled)
tree = mixpods.load_tree(tree)
fig = plt.figure(constrained_layout=True, figsize=(12, 6))
subfigs = fig.subfigures(2, 1)
mixpods.plot_stability_diagram_by_dataset(tree, fig=subfigs[0])
mixpods.plot_stability_diagram_by_dataset(resampled, fig=subfigs[1])
Older#
mixpods.plot_stability_diagram_by_dataset(datasets)
mixpods.plot_stability_diagram_by_phase(datasets, obs="TAO", fig=None)
Composed#
excluding MLD#
have not matched vertical grid spacing yet.
minor axis is smaller for increasing shear with KPP
prandtl number
interpolate to TAO grid
hybrid HYCOM-like case may have equatorial refinement of vertical grid above EUC
compare hybrid coordinate case resolution
instrumental error on vertical gradients for S2, N2;
if added to model, would it change the bottom tail.
Horizontal or vertical res:
MOM6 at 5km? hah
TIW compositing? La-Nina Dec?
top ends are similar, so maybe not?
sensitivity of cold tongue mean state
horizontal viscosity; controls EUC strength? (increase?)
vertical res; yan li’s paper
POP: thermocline always too diffuse and EUC too thick;
higher-order interpolants can control the amount of diffusion when remapping
get S2, N2 from TPOSE?
Questions:
internal wave spectrum?
minor axis is smaller for model, particularly MITgcm-> too diffusive?
Can we think of this as the model relaxing to critical Ri too quickly
tree = mixpods.load_tree(tree)
fig = plt.figure(constrained_layout=True, figsize=(12, 6))
subfigs = fig.subfigures(2, 1)
mixpods.plot_stability_diagram_by_dataset(tree, fig=subfigs[0])
mixpods.plot_stability_diagram_by_phase(tree, fig=subfigs[1])
including MLD#
fig = plt.figure(constrained_layout=True, figsize=(12, 6))
subfigs = fig.subfigures(2, 1, height_ratios=[1.2, 1])
top = subfigs[0].subfigures(1, 3, width_ratios=[1, 5, 1])
mixpods.plot_stability_diagram_by_dataset(datasets, fig=top[1])
mixpods.plot_stability_diagram_by_phase(datasets, fig=subfigs[1])
(
tao_gridded.n2s2pdf.sel(
enso_transition_phase=[
"La-Nina cool",
"La-Nina warm",
"El-Nino warm",
"El-Nino cool",
]
)
.sum("N2_bins")
.plot(hue="enso_transition_phase")
)
plt.figure()
(
tao_gridded.n2s2pdf.sel(
enso_transition_phase=[
"La-Nina cool",
"La-Nina warm",
"El-Nino warm",
"El-Nino cool",
]
)
.sum("S2_bins")
.plot(hue="enso_transition_phase")
)
[<matplotlib.lines.Line2D>,
<matplotlib.lines.Line2D>,
<matplotlib.lines.Line2D>,
<matplotlib.lines.Line2D>]
oni = pump.obs.process_oni().sel(time=slice("2005-Oct", "2015"))
(
oni.hvplot.step(color="k")
+ pump.obs.make_enso_transition_mask()
.sel(time=slice("2005-Jun", "2015"))
.reset_coords(drop=True)
.hvplot.step()
).cols(1)
ε Ri TAO check#
data = tao_gridded.drop_vars("zeuc")
epsZ = data.eps.cf["Z"].cf.sel(Z=slice(-69, -29))
# χpod data are available at a subset of depths
Ri = np.log10(mixpods.reindex_Z_to(data.Rig_T, epsZ))
# Ri_bins = np.logspace(np.log10(0.025), np.log10(2), 11)
Ri_bins = np.arange(-1.6, 0.4, 0.2) # in log space from Sally
# Ri_bins = np.array([0.02, 0.04, 0.06, 0.1, 0.3, 0.5, 0.7, 1.5, 2])
Ri_kwargs = {
"expected_groups": (Ri_bins, None),
"isbin": (True, False),
}
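`eps_ri` (used below) is presumably ε binned by log₁₀(Ri); a minimal equivalent of that reduction with `groupby_bins` and fabricated data (the flox call with `Ri_kwargs` lives elsewhere):

```python
import numpy as np
import xarray as xr

rng = np.random.default_rng(1)
# fabricated Ri uniform in log space over the bin range used above
Ri = xr.DataArray(10.0 ** rng.uniform(-1.6, 0.4, 5000), dims="sample", name="Ri")
# fabricated eps that decreases with Ri, with lognormal scatter
eps = 1e-7 * Ri ** -0.5 * 10.0 ** rng.normal(0.0, 0.2, 5000)
eps.name = "eps"

Ri_bins = np.arange(-1.6, 0.4 + 0.01, 0.2)  # log-space edges, as above
eps_ri = eps.groupby_bins(np.log10(Ri), bins=Ri_bins).mean()
```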
mpl.rcParams["figure.dpi"] = 120
(
data.eps.count("depthchi").hvplot.line(x="time")
+ Ri.count("depthchi").hvplot.line(x="time")
).cols(1)
data = tao_gridded.eps_n2s2.sel(
enso_transition_phase=[
"El-Nino warm",
"La-Nina warm",
"El-Nino cool",
"La-Nina cool",
]
)
fg = data.to_dataset("stat").plot.scatter(
x="S2_bins",
y="N2_bins",
hue="mean", # markersize="count",
col="enso_transition_phase",
col_wrap=2,
norm=mpl.colors.LogNorm(1e-7, 1e-6),
cmap=mpl.cm.Greys,
robust=True,
xlim=(-4.5, -2.5),
ylim=(-4.5, -2.5),
)
fg.set_titles("{value}")
np.log10(tao_gridded.eps_ri).sel(stat="mean").plot(
hue="enso_transition_phase", ylim=(-7.5, -6)
)
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/plot/plot.py:454: UserWarning: The handle <matplotlib.lines.Line2D object> has a label of '____________' which cannot be automatically added to the legend.
ax.legend(handles=primitive, labels=list(hueplt.to_numpy()), title=hue_label)
[<matplotlib.lines.Line2D>,
<matplotlib.lines.Line2D>,
<matplotlib.lines.Line2D>,
<matplotlib.lines.Line2D>,
<matplotlib.lines.Line2D>,
<matplotlib.lines.Line2D>,
<matplotlib.lines.Line2D>]
Turbulence#
Dan’s suggestions#
I liked your effort to look at shear production= KmdU/dz dot dU/dz = dissipation. I mentioned that it might be worth looking at
buoyancy flux = KT dT/dz * db/dT + KS dS/dz * db/dS with db/dT, db/dS from equation of state. I think I said KT*N^2, but I hope after checking that this is the same in this region (K_T~K_S). You could then divide buoyancy flux by 0.2 to estimate epsilon for comparison with your estimate based on shear production.
Dimensionless parameters. We alluded to this, but I think it would be interesting to explicitly quantify flux Richardson number Rif= buoyancy flux/shear production (as defined above). And Prt=KT/Km=Rif/Rig, where Rig is the usual gradient Richardson number. Following a similar logic, you might look at Reb= epsilon/nu N^2 with nu constant 1e-6 molecular viscosity. We discussed looking at diffusivity and viscosity in addition to epsilon. But note the similarity between dimensionless diffusivity KT/kappa and viscosity Km/nu, i.e. Reb=Rig*KM/nu. (with kappa a constant molecular diffusivity to make dimensionless)
I like how you continue to push what we can learn from the distributions in S2-N2 parameter space in addition to physical space. It is good for comparing with limited observations or process models. I think you can push this even further to develop metrics quantifying how the momentum and buoyancy fluxes “tango” with each other and with S2 and N2. In addition, perhaps it would be valuable to flip the perspective and look at shear-production-buoyancy flux space? Or consider various two-dimensional spaces of non-dimensional parameters mentioned above: Rig, Reb, Rif, KT/kappa, Km/nu.
Finally, I think it would be interesting to try to examine how momentum flux and shear production below the mixed layer relate to wind stress as well as the local shear. There might be a relationship that emerges in KPP, even if the K profile is formally dependent only on local Rig at these depths.
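The dimensionless numbers suggested above fit together as in this numerical sketch (all values fabricated; `P`, `B` are hypothetical names for shear production and buoyancy flux):

```python
nu = 1e-6  # molecular viscosity (m^2 s^-1), constant as suggested

# fabricated stratification, shear, and eddy coefficients
N2, S2 = 1e-4, 5e-4   # s^-2
Km, KT = 1e-3, 5e-4   # m^2 s^-1

P = Km * S2   # shear production = Km dU/dz . dU/dz
B = KT * N2   # buoyancy flux (assuming K_T ~ K_S, per the note above)
eps = P - B   # TKE balance sketch: production minus buoyancy loss

Rig = N2 / S2          # gradient Richardson number
Rif = B / P            # flux Richardson number
Reb = eps / (nu * N2)  # buoyancy Reynolds number
# consistency: KT/Km = Rif/Rig, exactly as stated above
```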
(
np.log10(np.abs(baseline.Rif.cf.sel(Z=slice(-110, 0))))
.reset_coords(drop=True)
.hvplot.hist(bins=np.linspace(-2, 1, 100))
) * hv.VLine(np.log10(0.2)).opts(line_color="k")
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/computation.py:771: RuntimeWarning: divide by zero encountered in log10
result_data = func(*input_data)
ε-Jb histogram#
from xhistogram.xarray import histogram
jpdf = histogram(
np.log10(baseline.eps),
np.log10(0.2 * baseline.eps_Jb),
density=True,
bins=np.linspace(-10, -4, 100),
).compute()
jpdf.plot.contourf(levels=52, robust=True, cmap=mpl.cm.nipy_spectral)
dcpy.plots.line45()
from mom6_tools import wright_eos
mom6tao = baseline
mom6tao["α"] = wright_eos.alpha_wright_eos(mom6tao.thetao, mom6tao.so, p=0)
mom6tao["α"].attrs = {
"standard_name": "sea_water_thermal_expansion_coefficient",
"units": "kg m-3 C-1",
}
mom6tao["β"] = wright_eos.beta_wright_eos(mom6tao.thetao, mom6tao.so, p=0)
mom6tao["β"].attrs = {
"standard_name": "sea_water_haline_contraction_coefficient",
"units": "kg m-3",
}
(
baseline.Kd_heat * baseline.α * baseline.Tz
+ baseline.Kd_heat * baseline.β * baseline.Sz
)
<xarray.DataArray (time: 507264, zi: 27, zl: 27)>
dask.array<chunksize=(8760, 26, 27), meta=np.ndarray>
Coordinates: (12/14)
* time (time) datetime64[ns] 1958-01-06T00:30:00 ... 2016-12-31...
xh float64 -140.0
yh float64 0.0625
yq float64 -0.0625
* zi (zi) float64 -230.8 -212.0 -194.4 -177.8 ... -5.0 -2.5 -0.0
eucmax (time) float64 dask.array<chunksize=(8760,), meta=np.ndarray>
... ...
en_mask (time) bool False False False False ... False False False
ln_mask (time) bool False False False False ... False False False
warm_mask (time) bool True True True True ... True True True True
cool_mask (time) bool False False False False ... False False False
enso_transition (time) <U12 '____________' ... '____________'
* zl (zl) float64 -240.8 -221.4 -203.2 ... -6.25 -3.75 -1.25
Attributes:
cell_measures: area: areacello
cell_methods: area:mean zi:point yh:mean xh:mean time: mean
long_name: Total diapycnal diffusivity for heat at interfaces
time_avg_info: average_T1,average_T2,average_DT
units: m2 s-1
Mean SST, Jq#
def flatten(tree):
varnames = next(iter(tree.children.values())).data_vars
out = xr.Dataset()
coord = xr.DataArray(list(tree.keys()), dims="node")
for var in varnames:
variables = (node[var] for name, node in tree.children.items())
out[var] = xr.concat(variables, dim=coord)
return out
sub = tree.map_over_subtree(
lambda node: (
node.cf[["Jq", "sea_surface_temperature", "ocean_vertical_heat_diffusivity"]]
.pipe(np.abs)
.cf.rename(
{"sea_surface_temperature": "sst", "ocean_vertical_heat_diffusivity": "KT"}
)
.cf.sel(Z=slice(-60, -20))
# .cf.mean("Z")
.cf.groupby("time.month")
.mean(["Z", "time"])
.load()
)
).pipe(flatten)
h = (
sub.Jq.hvplot.line(by="node", legend=True)
+ sub.sst.hvplot.line(by="node", logy=True, legend=True)
).cols(1)
h
vertical = tree.map_over_subtree(
lambda node: (
node.cf[["Jq", "ocean_vertical_heat_diffusivity"]]
.cf.rename({"ocean_vertical_heat_diffusivity": "KT"})
.cf.sel(Z=slice(-120, -20))
.groupby("time.season")
.mean()
.sel(season=["DJF", "MAM", "JJA", "SON"])
.load()
)
)
seasonal = {}
seasonal["Jq"] = mixpods.map_hvplot(
lambda ds, name, muted: (
ds["Jq"]
.cf.rename({"Z": "depth"})
.hvplot.line(y="depth", col="season", label=name, muted=muted)
),
vertical.children,
)
seasonal["KT"] = mixpods.map_hvplot(
lambda ds, name, muted: (
ds["KT"]
.cf.rename({"Z": "depth"})
.hvplot.line(logx=True, y="depth", col="season", label=name, muted=muted)
),
vertical.children,
)
def hvplot_facet(tree, varname, col, **kwargs):
by = next(iter(tree.children.values()))[col].data
handles = []
for by_ in by:
handles.append(
hv.Overlay(
[
node.ds.cf[varname]
.sel({col: by_})
.hvplot.line(label=name, **kwargs)
for name, node in tree.children.items()
]
)
)
return hv.Layout(handles)
proc = vertical.map_over_subtree(lambda node: node.cf.rename({"Z": "Z"}))
layout = hv.Layout(
[
hvplot_facet(
proc,
varname="Jq",
col="season",
),
hvplot_facet(
proc,
varname="ocean_vertical_heat_diffusivity",
col="season",
logx=True,
ylim=(1e-6, 1e-1),
),
]
)
layout.opts(
hv.opts.Curve(invert_axes=True, xrotation=20, frame_width=160, frame_height=300),
hv.opts.Overlay(legend_position="bottom_left", shared_axes=True, labelled=["x"]),
)
mixpods.map_hvplot(
lambda ds, name, muted: (ds["Tz"] < 0)
.sum("time")
.hvplot.line(label=name, muted=muted, invert=True),
tree,
)
np.log10(tree["kpp.lmd.004"]["Kd_heat"].resample(time="M").mean()).hvplot.quadmesh(
y="zi"
)
mixpods.map_hvplot(
lambda ds, name, muted: ds.ds.cf["ocean_vertical_heat_diffusivity"]
.mean("time")
.hvplot.line(label=name, muted=muted),
tree.children,
).collate().opts(
ylim=(1e-8, 1e1),
legend_position="right",
logx=True,
invert_axes=True,
frame_width=300,
aspect=1 / 3,
)
WARNING:param.OverlayPlot02012: Due to internal constraints, when aspect and width/height is set, the bokeh backend uses those values as frame_width/frame_height instead. This ensures the aspect is respected, but means that the plot might be slightly larger than anticipated. Set the frame_width/frame_height explicitly to suppress this warning.
WARNING:param.OverlayPlot02012: aspect value was ignored because absolute width and height values were provided. Either supply explicit frame_width and frame_height to achieve desired aspect OR supply a combination of width or height and an aspect value.
mixpods.map_hvplot(
lambda ds, name, muted: ds["Jq"].mean("time").hvplot.line(label=name, muted=muted),
tree.children,
).collate().opts(
legend_position="right", invert_axes=True, frame_width=300, aspect=1 / 3
)
Histograms: Ri, ε#
take out MLD?
def hvplot_step_hist(da, bins, name=None, **kwargs):
from xhistogram.xarray import histogram
return histogram(da, bins=bins, **kwargs).hvplot.step(label=name, group_label=name)
def plot_distributions(tree, var, bins, log=False):
identity = lambda x: x
transform = np.log10 if log else identity
def _subset(ds):
mask = ds.tao_mask.fillna(0).astype(bool)
ds = ds.cf[var].where(mask).reset_coords(drop=True).cf.sel(Z=slice(-69, -29))
return transform(ds)
subset = tree.map_over_subtree(_subset)
return mixpods.map_hvplot(
lambda node, name: hvplot_step_hist(
node.ds.reset_coords().cf[var], density=True, bins=bins, name=name
),
subset.children,
)
handles = [
plot_distributions(tree, "chi", bins=np.linspace(-11, -4, 101), log=True),
plot_distributions(tree, "eps", bins=np.linspace(-11, -4, 101), log=True),
plot_distributions(
tree, "ocean_vertical_heat_diffusivity", bins=np.linspace(-8, -1, 101), log=True
),
plot_distributions(tree, "Jq", bins=np.linspace(-1000, 0, 51), log=False),
plot_distributions(tree, "Rig_T", np.linspace(-0.5, 1.5, 61))
* hv.VLine(0.25).opts(line_color="k"),
]
hv.Layout(handles).opts("Overlay", frame_width=600).cols(2)
for node in ["baseline", "kpp.lmd.002", "kpp.lmd.003", "kpp.lmd.004"]:
plt.figure()
tree[node].sel(time=slice("2008-10-25", "2008-11-15")).ds.cf[
"ocean_vertical_heat_diffusivity"
].cf.sel(Z=slice(-120, None)).reset_coords(drop=True).cf.plot(
cmap=mpl.cm.Spectral_r, size=3, aspect=5, norm=mpl.colors.LogNorm(1e-7, 1e-2)
)
<Figure size 896x672 with 0 Axes>
<Figure size 896x672 with 0 Axes>
<Figure size 896x672 with 0 Axes>
<Figure size 896x672 with 0 Axes>
for node in ["baseline", "kpp.lmd.002", "kpp.lmd.003", "kpp.lmd.004"]:
plt.figure()
tree[node].sel(time=slice("2008-10-25", "2008-11-15"))["eps"].cf.sel(
Z=slice(-120, None)
).reset_coords(drop=True).cf.plot(
cmap=mpl.cm.Spectral_r, size=3, aspect=5, norm=mpl.colors.LogNorm(1e-9, 1e-5)
)
<Figure size 896x672 with 0 Axes>
<Figure size 896x672 with 0 Axes>
<Figure size 896x672 with 0 Axes>
<Figure size 896x672 with 0 Axes>
Panel debug#
import panel as pn
print(layout)
:Layout
.Overlay.I :Overlay
.Curve.TAO :Curve [chi_bin] (histogram_chi)
.Curve.Baseline :Curve [chi_bin] (histogram_chi)
.Curve.Epbl :Curve [chi_bin] (histogram_chi)
.Curve.Kpp_full_stop_lmd_full_stop_002 :Curve [chi_bin] (histogram_chi)
.Curve.Kpp_full_stop_lmd_full_stop_003 :Curve [chi_bin] (histogram_chi)
.Curve.Kpp_full_stop_lmd_full_stop_004 :Curve [chi_bin] (histogram_chi)
.Overlay.II :Overlay
.Curve.TAO :Curve [eps_bin] (histogram_eps)
.Curve.Baseline :Curve [eps_bin] (histogram_eps)
.Curve.Epbl :Curve [eps_bin] (histogram_eps)
.Curve.Kpp_full_stop_lmd_full_stop_002 :Curve [eps_bin] (histogram_eps)
.Curve.Kpp_full_stop_lmd_full_stop_003 :Curve [eps_bin] (histogram_eps)
.Curve.Kpp_full_stop_lmd_full_stop_004 :Curve [eps_bin] (histogram_eps)
def change_muted(value):
present = layout.Overlay.I.Curve.children
normalized = [n.replace(".", "_full_stop_") for n in value]
selected = [n for n in normalized if n in present]
pass
def change_muted_test(value):
present = layout.Overlay.I.Curve.children
normalized = [n.replace(".", "_full_stop_") for n in value]
selected = [n for n in normalized if n in present]
pass
names = list(tree.children.keys())
checkbox = pn.widgets.CheckBoxGroup(value=names, options=names, inline=True)
checkbox.jslink(layout, {"value": change_muted})
debug = pn.widgets.StaticText(name="debug", value="default")
checkbox.jslink(debug, dict(value=change_muted_test))
pn.Column(debug, checkbox, layout.cols(1))
bins = np.linspace(-0.5, 1.5, 61)
handles = [
hvplot_step_hist(
ds["Rig_T"].reset_coords(drop=True).cf.sel(Z=slice(-69, -29)),
name=name,
bins=bins,
)
for name, ds in datasets.items()
]
(hv.Overlay(handles) * hv.VLine(0.25).opts(line_color="k")).opts(ylabel="PDF")
ε vs Ri#
Possible reasons for difference
My MLD seems deeper even though I use the same ΔT=0.15 threshold. It could be that they’ve used Akima splines to interpolate in the vertical
~I’ve filtered to DCL, so accounting for MLD and EUC movement. I’m not sure they did that.~
Only 𝜒pods between 29 and 69 m are used in this analysis as deeper 𝜒pods are more strongly influenced by the variability of zEUC than by surface forcing.
TODO
EUC strength is proportional to horizontal visc for 1° models
Add \(ε_χ\) for MOM6
Do for K
composite by TIW strength
start with 10min χpod data, then average to hourly.
make composites off the equator: look at strong off-equatorial du/dx; du/dz
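The ε_χ in the TODO above presumably refers to the standard Osborn–Cox χ-based estimate: K_T = χ / (2 Tz²), then ε_χ = K_T N² / Γ with mixing coefficient Γ = 0.2. A sketch with fabricated values:

```python
# fabricated point values
chi = 1e-7    # K^2 s^-1, temperature variance dissipation
Tz = 0.05     # K m^-1, vertical temperature gradient
N2 = 1e-4     # s^-2
gamma = 0.2   # mixing coefficient

KT = chi / (2 * Tz ** 2)     # Osborn-Cox diffusivity, m^2 s^-1
eps_chi = KT * N2 / gamma    # chi-based dissipation estimate, W kg^-1
```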
f, ax = plt.subplots(
2, len(datasets) // 2, sharex=True, sharey=True, constrained_layout=True
)
for axx, (name, ds) in zip(ax.flat, tree.children.items()):
da = np.log10(ds["eps_ri"])
# da["Rig_T_bins"] = pd.Index(
# [pd.Interval(np.log10(i.left), np.log10(i.right))
# for i in tao_gridded.Rig_T_bins.data]
# )
(
da.sel(stat="mean")
.sel(enso_transition_phase=["La-Nina cool", "El-Nino warm"])
.plot(hue="enso_transition_phase", marker=".", ax=axx, add_legend=False)
)
axx.set_title(name)
ticks = [0.04, 0.1, 0.25, 0.63, 1.6]
# axx.set_yticks([-7.25, -7, -6.5, -6])
axx.set_ylim([-7.5, -5.5])
axx.set_xticks(np.log10(ticks))
axx.set_xticklabels([f"{a:.2f}" for a in ticks])
# axx.tick_params(axis='x', rotation=30)
ax[0, 0].set_ylabel("ε")
dcpy.plots.linex(np.log10(0.25), ax=ax.flat)
# dcpy.plots.clean_axes(ax)
f.set_size_inches((8, 4))
Shear spectra#
Notes#
No obvious difference between clockwise and counter-clockwise so I’ve chosen to just do the spectrum of the magnitude
think about tides
what is the “DATM coupling interval”
Kelvin’s KPP parameters
what are the momentum and heat fluxes over some resolvable timescale. And how do we compare to obs.
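For the clockwise/counter-clockwise comparison noted above, the rotary decomposition splits the complex FFT of u + iv into negative (CW) and positive (CCW) frequency halves; a sketch with a fabricated, purely rotating signal (the sign convention for CW varies with the definition):

```python
import numpy as np

dt = 3600.0                 # hourly sampling, seconds
t = np.arange(4096) * dt
f0 = 2e-5                   # Hz; fabricated inertial-like frequency
w = np.exp(-2j * np.pi * f0 * t)  # purely clockwise e^{-i omega t} rotation

freq = np.fft.fftfreq(t.size, d=dt)
psd = np.abs(np.fft.fft(w)) ** 2 / t.size

cw = psd[freq < 0].sum()    # clockwise variance (negative frequencies)
ccw = psd[freq > 0].sum()   # counter-clockwise variance (positive frequencies)
```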
Frequency spectra @ Z= 50m#
DEPTHS = [-50, -60, -75]
f, ax = plt.subplots(
len(DEPTHS), 1, sharex="col", sharey=True, squeeze=False, constrained_layout=True
)
for name in tree.children.keys():
if name == "tropicheat":
continue
ds = tree[name].ds.reset_coords()
u = ds.cf["sea_water_x_velocity"]
v = ds.cf["sea_water_y_velocity"]
S = (u + 1j * v).cf.sel(Z=DEPTHS, method="nearest").load()
for axx, z in zip(ax, DEPTHS):
dcpy.ts.PlotSpectrum(
np.abs(S).cf.sel(Z=z, method="nearest"),
multitaper=False,
nsmooth=12,
label=name,
lw=0.75,
ax=axx[0],
)
axx[-1].text(x=0.9, y=0.9, s=f"{z} m", transform=axx[-1].transAxes)
ax.ravel()[0].set_ylim([1e-9, None])
ax.ravel()[-1].legend()
dcpy.plots.clean_axes(ax)
f.set_size_inches((4, 6))
f, ax = plt.subplots(
len(DEPTHS), 1, sharex="col", sharey=True, squeeze=False, constrained_layout=True
)
for name in tree.children.keys():
ds = tree[name].ds.reset_coords()
u = ds.cf["sea_water_x_velocity"]
v = ds.cf["sea_water_y_velocity"]
S = (
(u.cf.differentiate("Z") + 1j * v.cf.differentiate("Z"))
.cf.sel(Z=DEPTHS, method="nearest")
.load()
)
for axx, z in zip(ax, DEPTHS):
dcpy.ts.PlotSpectrum(
np.abs(S).cf.sel(Z=z, method="nearest"),
multitaper=False,
nsmooth=12,
label=name,
lw=0.75,
ax=axx[0],
)
axx[-1].text(x=0.9, y=0.9, s=f"{z} m", transform=axx[-1].transAxes)
ax.ravel()[0].set_ylim([1e-9, None])
ax.ravel()[-1].legend()
dcpy.plots.clean_axes(ax)
f.set_size_inches((4, 6))
2D frequency-wavenumber spectra#
S = (u.cf.differentiate("Z") + 1j * v.cf.differentiate("Z")).sel(time="2010")
uniform = (
S.cf.sel(Z=slice(-150, 0))
.cf.interp(Z=np.arange(-150, -52, 5))
.interpolate_na("time")
)
uniform
<xarray.DataArray (time: 8760, depth: 20)>
(-0.008227312937378883-0.014154188334941864j) ... (-0.030751610174775124+0.00...
Coordinates:
* time (time) datetime64[ns] 2010-01-01 ... 2010-12-31T23:00:00
* depth (depth) int64 -150 -145 -140 -135 -130 -125 ... -75 -70 -65 -60 -55
spec = xrft.power_spectrum(
uniform,
dim=(S.cf.axes["Z"][0], "time"),
window="hann",
window_correction=True,
)
sym = (
2
* 2
* spec.isel(
freq_depth=slice(uniform.cf.sizes["Z"] // 2 + 1, None),
freq_time=slice(uniform.cf.sizes["time"] // 2 + 1, None),
)
)
sym.mean("freq_depth").rolling(freq_time=5).mean(center=True, min_periods=5).plot(
xscale="log", yscale="log"
)
[<matplotlib.lines.Line2D>]
import dcpy.ts
sym.plot(
xscale="log",
yscale="log",
norm=mpl.colors.LogNorm(),
robust=True,
cmap=mpl.cm.turbo,
)
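The `window="hann"`, `window_correction=True`, and factor-of-2 one-siding used with `xrft` above amount to the following numpy sketch, in which the one-sided, window-corrected PSD sums back to the windowed variance by Parseval (this is my reading of what the correction does, not the `xrft` source):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=8192)
n = x.size
win = np.hanning(n)

# power correction: divide by the mean-square amplitude of the window
xw = (x - x.mean()) * win
psd = np.abs(np.fft.rfft(xw)) ** 2 / n / (win ** 2).mean()

# one-sided: double every bin except DC and (for even n) Nyquist
psd[1:-1] *= 2
recovered = psd.sum() / n  # should match the windowed variance of x
```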
Vertical wavenumber spectra#
think about vertical migration of EUC
ds = tree["kpp.lmd.004"].ds
ds.S2.sel(time=ds.time.dt.month.isin([10, 11]))
<xarray.DataArray 'S2' (time: 36600, zi: 27)>
dask.array<chunksize=(1464, 26), meta=np.ndarray>
Coordinates: (12/14)
* time (time) datetime64[ns] 2003-10-01T00:30:00 ... 2027-11-30...
xh float64 -140.0
yh float64 0.0625
yq float64 -0.0625
* zi (zi) float64 -230.8 -212.0 -194.4 -177.8 ... -5.0 -2.5 -0.0
eucmax (time) float64 dask.array<chunksize=(1464,), meta=np.ndarray>
... ...
oni (time) float32 0.2936 0.2936 0.2936 0.2936 ... nan nan nan
en_mask (time) bool False False False False ... False False False
ln_mask (time) bool False False False False ... False False False
warm_mask (time) bool True True True True ... True True True True
cool_mask (time) bool False False False False ... False False False
enso_transition (time) <U12 '____________' ... '____________'
Attributes:
long_name: $S^2$
units: s$^{-2}$# f, ax = plt.subplots(1, 2, sharey=True, squeeze=False)
handles = []
for i, name in enumerate(tree.children.keys()):
ds = tree[name].ds.reset_coords()
u = ds.cf["sea_water_x_velocity"].load()
v = ds.cf["sea_water_y_velocity"].load()
S = (
(u.cf.differentiate("Z") + 1j * v.cf.differentiate("Z"))
.cf.sel(Z=slice(-200, 0))
.cf.interp(Z=np.arange(-105, -52, 5))
)
Zname = S.cf.axes["Z"][0]
spec = xrft.power_spectrum(
S,
dim=S.cf.axes["Z"],
window="hann",
window_correction=True,
).mean("time")
sym = 2 * spec.isel({f"freq_{Zname}": slice(S.cf.sizes["Z"] // 2 + 1, None)})
sym.name = "spectral density"
handles.append(sym.hvplot.line(logx=True, logy=True, label=name))
hv.Overlay(handles).opts(legend_position="right")
# f, ax = plt.subplots(1, 2, sharey=True, squeeze=False)
handles = []
for i, name in enumerate(tree.children.keys()):
ds = tree[name].ds.reset_coords()
u = ds.cf["sea_water_x_velocity"].load()
v = ds.cf["sea_water_y_velocity"].load()
S = (
(u.cf.differentiate("Z") + 1j * v.cf.differentiate("Z"))
.cf.sel(Z=slice(-200, 0))
.cf.interp(Z=np.arange(-105, -52, 5))
)
if "kpp" in name or "TAO" in name:
S = S.sel(time=ds.time.dt.month.isin([10, 11]))
Zname = S.cf.axes["Z"][0]
spec = xrft.power_spectrum(
S,
dim=S.cf.axes["Z"],
window="hann",
window_correction=True,
).mean("time")
sym = 2 * spec.isel({f"freq_{Zname}": slice(S.cf.sizes["Z"] // 2 + 1, None)})
sym.name = "spectral density"
handles.append(sym.hvplot.line(logx=True, logy=True, label=name))
hv.Overlay(handles).opts(legend_position="right")
f, ax = plt.subplots(1, 2, sharey=True, squeeze=False)
for name in tree.children.keys():
ds = tree[name].ds.reset_coords()
u = ds.cf["sea_water_x_velocity"].load()
v = ds.cf["sea_water_y_velocity"].load()
S = u.cf.differentiate("Z") + 1j * v.cf.differentiate("Z")
dcpy.ts.PlotSpectrum(
S.cf.sel(Z=-50, method="nearest").sel(time="2010"),
multitaper=False,
twoside=True,
nsmooth=12,
label=name,
lw=0.75,
ax=ax.ravel(),
)
ax.ravel()[0].set_ylim([1e-10, None])
ax.ravel()[-1].legend()
f.set_size_inches((8, 4))
Seasonal mean heat flux#
remapped.Jq.load()
<xarray.DataArray 'Jq' (time: 174000, zeuc: 100)>
array([[ nan, nan, -0.07841657, ..., nan,
nan, nan],
[ nan, nan, -0.07973828, ..., nan,
nan, nan],
[ nan, nan, -0.08094554, ..., nan,
nan, nan],
...,
[ nan, -0.07447129, nan, ..., nan,
nan, nan],
[ nan, -0.07471326, nan, ..., nan,
nan, nan],
[ nan, -0.07509062, nan, ..., nan,
nan, nan]])
Coordinates:
latitude float64 0.025
longitude float64 -140.0
* time (time) datetime64[ns] 1998-12-31T18:00:00 ... 2018-11-06T17:00:00
* zeuc (zeuc) float64 -247.5 -242.5 -237.5 -232.5 ... 237.5 242.5 247.5import hvplot.xarray
(
remapped.Jq.groupby("time.season")
.mean()
.reindex(season=["DJF", "MAM", "JJA", "SON"])
.cf.plot.line(col="season")
)
<xarray.plot.facetgrid.FacetGrid>
Test out available variables#
KPP_kheat is non-zero only in the KPP OBL
(mom6140.Tflx_dia_diff * 1025 * 4000).cf.plot(robust=True)
<matplotlib.collections.QuadMesh>
mom6140.Kv_u.cf.plot(norm=mpl.colors.LogNorm(1e-6, 1e-2))
<matplotlib.collections.QuadMesh>
mom6140.Kd_heat.cf.plot(norm=mpl.colors.LogNorm(1e-6, 1e-2))
mom6140.KPP_OBLdepth.plot(color="r")
plt.figure()
mom6140.KPP_kheat.cf.plot(norm=mpl.colors.LogNorm(1e-6, 1e-2))
mom6140.KPP_OBLdepth.plot(color="r")
plt.figure()
(mom6140.KPP_kheat - mom6140.Kd_heat).cf.plot(
robust=True, cmap=mpl.cm.Reds_r, vmax=0
)
<matplotlib.collections.QuadMesh>
Experiment with manual combining#
import intake
from intake.source.utils import reverse_format
years = []
tiles = []
for file in files[:10]:
p = pathlib.Path(file)
params = reverse_format("__{year}.nc.{tile}", p.stem + p.suffix)
years.append(params["year"])
tiles.append(params["tile"])
years, tiles
import toolz as tlz
bytile = {}
for tile, v in tlz.groupby(tileid, files).items():
bytile[tile] = xr.concat(read_raw_files(v, parallel=True), dim="time")
print("\n".join(str(hash(tuple(ds.yh.data.tolist()))) for ds in bytile.values()))
from functools import partial
def hash_coords(ds, axis):
dims = ds.cf.axes[axis]
data = np.concatenate([ds[dim].data for dim in dims])
return hash(tuple(data))
grouped = bytile
for axis, concat_axis in [("X", "Y"), ("Y", "X")]:
grouped = tlz.groupby(partial(hash_coords, axis=axis), grouped.values())
grouped = {
k: cfconcat(v, axis=concat_axis, join="exact") for k, v in grouped.items()
}
break
grouped
cfconcat(list(grouped.values()), "X")
combined = xr.combine_by_coords(list(bytile.values()), combine_attrs="override")
def raise_if_bad_index(combined):
bad = []
for idx in combined.indexes:
index = combined.indexes[idx]
if not index.is_monotonic_increasing or index.has_duplicates:
bad.append(idx)
if bad:
raise ValueError(f"Indexes {bad} are either not monotonic or have duplicates")
def tileid(path):
p = pathlib.Path(path)
# print(p.suffix)
# params = reverse_format("__{year}.nc.{tile}", p.stem + p.suffix)
return int(p.suffix[1:]) # params["tile"]
# years.append(params["year"])
# tiles.append(params["tile"])
sorted_files = sorted(files, key=tileid)
for tile, files in groupby(sorted_files, tileid):
print(tile, len(list(files)))
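The tile grouping above relies on sorting before `itertools.groupby`, which only merges adjacent equal keys; a self-contained sketch with fabricated filenames (`fnames` is hypothetical, not the notebook's `files`):

```python
import pathlib
from itertools import groupby

# fabricated filenames: two years, three tiles, deliberately unsorted
fnames = [f"ocean__{yr}.nc.{tile:04d}" for tile in (3, 1, 2) for yr in (1995, 1996)]

def tileid(path):
    # tile id is the numeric suffix after ".nc."
    return int(pathlib.Path(path).suffix[1:])

# sort first: groupby only groups runs of adjacent equal keys
sorted_fnames = sorted(fnames, key=tileid)
bytile = {tile: list(fs) for tile, fs in groupby(sorted_fnames, tileid)}
```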
Test out ONI#
Phase labeling#
oni = pump.obs.process_oni().sel(time=slice("2005-Sep", None))
enso_transition = mixpods.make_enso_transition_mask(oni)
mixpods.plot_enso_transition(oni, enso_transition)
Replicate ONI calculation#
close enough!
ersst = xr.tutorial.open_dataset("ersstv5")
monthlyersst = (
ersst.sst.cf.sortby("Y")
.cf.sel(latitude=slice(-5, 5), longitude=slice(360 - 170, 360 - 120))
.cf.mean(["X", "Y"])
.resample(time="M")
.mean()
.load()
)
expected = mixpods.calc_oni(monthlyersst)
actual = pump.obs.process_oni()
actual.plot()
expected.plot()
(actual - expected).plot()
[<matplotlib.lines.Line2D>]
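For reference, the final step of the ONI calculation being replicated above is a centered 3-month running mean of monthly Niño-3.4 SST anomalies. A numpy sketch of just that smoothing step (the rolling 30-year climatology NOAA uses for the anomaly baseline is omitted, and `three_month_running_mean` is an illustrative helper, not `mixpods.calc_oni`):

```python
import numpy as np


def three_month_running_mean(anom):
    """Centered 3-month running mean; endpoints have no full window."""
    anom = np.asarray(anom, dtype=float)
    out = np.full_like(anom, np.nan)
    out[1:-1] = (anom[:-2] + anom[1:-1] + anom[2:]) / 3.0
    return out


monthly_anomaly = np.array([0.3, 0.6, 0.9, 1.2, 0.9])
print(three_month_running_mean(monthly_anomaly))
# [nan 0.6 0.9 1.  nan]
```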
Comparing to Warner and Moum code#
chipod.eps.sel(time="2007-01-14 10:35:00", method="nearest").load()
<xarray.DataArray 'eps' (depth: 5)>
nan 3.929e-07 2.846e-07 2.234e-07 4.491e-07
Coordinates:
time datetime64[ns] 2007-01-14T11:00:00
* depth (depth) float64 -69.0 -59.0 -49.0 -39.0 -29.0
timeSeries (depth) float64 69.0 59.0 49.0 39.0 29.0
lat (depth) float64 0.0 0.0 0.0 0.0 0.0
lon (depth) float64 -140.0 -140.0 -140.0 -140.0 -140.0
Attributes:
long_name: turbulence dissipation rate
standard_name: specific_turbulent_kinetic_energy_dissipation_in_...
ncei_name: turbulent kinetic energy (TKE) dissipation rate
units: W kg-1
FillValue: -9999
valid_min: 1e-12
valid_max: 9.999999999999999e-06
coverage_content_type: physicalMeasurement
grid_mapping: crs
source: inferred from fast thermistor spectral scaling
references: Moum J.N. and J.D. Nash, Mixing measurements on a...
cell_methods: depth: point, time: mean
platform: mooring
instrument: chipod

tao_gridded.enso_transition.loc[{"time": slice("2015-12", "2016-01")}]
<xarray.DataArray 'enso_transition' (time: 1488)>
'El-Nino warm' 'El-Nino warm' 'El-Nino warm' ... 'El-Nino warm' 'El-Nino warm'
Coordinates: (12/13)
deepest (time) float64 -300.0 -300.0 -300.0 ... -300.0 -300.0
eucmax (time) float64 -135.0 -140.0 -135.0 ... -140.0 -135.0
latitude float32 0.0
longitude float32 -140.0
mld (time) float64 -9.0 -8.0 -8.0 -8.0 ... nan nan nan nan
mldT (time) float64 -10.0 -8.0 -8.0 -8.0 ... nan nan nan nan
... ...
shallowest (time) float64 -5.0 -5.0 -5.0 -5.0 ... -10.0 -10.0 -10.0
* time (time) datetime64[ns] 2015-12-01 ... 2016-01-31T23:00:00
oni (time) float32 2.53 2.53 2.53 2.53 ... 2.53 2.53 2.53
warm_mask (time) bool True True True True ... False False False
cool_mask (time) bool False False False False ... True True True
enso_transition (time) <U12 'El-Nino warm' ... 'El-Nino warm'
Attributes:
description: Warner & Moum (2019) ENSO transition phase; El-Nino = ONI >...